Multimodal dialog in the car: combining speech and turn-and-push dial to control comfort functions
Authors
Abstract
In this paper, we address the question of how speech and tangible interfaces can be combined to provide effective multimodal interaction in vehicles, taking into account the special requirements induced by the circumstances of driving. Speech is used to set the interaction context (i.e., to determine the object that is to be manipulated), and a turn-and-push dial is used to perform the adjustment. An experimental study is presented that measures the distraction induced by manual (conventional), speech-only, and multimodal interaction (a combination of speech and turn-and-push dial). Results show that while subjects were able to perform more tasks in the manual condition, their driving was significantly safer while using speech-only or multimodal dialog. Supplemental contributions of this paper are descriptions of how a multimodal dialog manager and driving simulation software are connected to the CAN (Controller Area Network) vehicle bus, and how driver distraction caused by interacting with a system is measured using the standardized lane change task (LCT).
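The abstract does not include an implementation, but the interaction pattern it describes is straightforward to sketch: speech selects which comfort function is in focus, and subsequent turn-and-push dial events adjust and confirm its value. Below is a minimal, hypothetical Python sketch of that split between context setting and adjustment; all names (ComfortFunction, MultimodalManager, on_speech, on_dial_turn, on_dial_push) are illustrative assumptions and not part of the system described in the paper.

```python
# Minimal sketch (not from the paper) of the "speech sets context,
# dial adjusts" interaction pattern. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class ComfortFunction:
    name: str          # e.g. "cabin_temperature"
    value: float       # current setting
    step: float        # change per dial detent
    min_value: float
    max_value: float

class MultimodalManager:
    """Combines a speech command (context) with turn-and-push dial input."""

    def __init__(self, functions):
        self.functions = {f.name: f for f in functions}
        self.active = None  # function currently selected by speech

    def on_speech(self, utterance: str):
        # Speech only selects WHAT is to be manipulated, e.g. "temperature".
        for name, func in self.functions.items():
            if name.split("_")[-1] in utterance.lower():
                self.active = func
                return f"{func.name} selected, currently {func.value}"
        return "No matching comfort function recognized"

    def on_dial_turn(self, detents: int):
        # The dial adjusts HOW MUCH, relative to the spoken context.
        if self.active is None:
            return "Turn ignored: no function selected by speech yet"
        f = self.active
        f.value = min(f.max_value, max(f.min_value, f.value + detents * f.step))
        return f"{f.name} -> {f.value}"

    def on_dial_push(self):
        # Pushing the dial confirms the value and closes the context.
        if self.active is None:
            return "Push ignored"
        name = self.active.name
        self.active = None
        return f"{name} confirmed"

if __name__ == "__main__":
    mgr = MultimodalManager([
        ComfortFunction("cabin_temperature", 21.0, 0.5, 16.0, 28.0),
        ComfortFunction("fan_speed", 2.0, 1.0, 0.0, 6.0),
    ])
    print(mgr.on_speech("set the temperature"))  # speech sets the context
    print(mgr.on_dial_turn(+3))                  # dial adjusts the value
    print(mgr.on_dial_push())                    # push confirms
```

In the setup the paper describes, the dial events would arrive over the CAN vehicle bus and the speech recognizer would run asynchronously; the sketch only illustrates the division of labor between the two modalities.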
Similar resources
Commute UX: Voice Enabled In-car Infotainment System
Voice-enabled dialog systems are well suited for in-car applications. Driving is an eyes-busy and hands-busy task, and the only wideband communication channel left is speech. Such systems are in the midst of a transformation from a cool gadget to an integral part of the modern automobile. In this paper we highlight the major requirements for an in-car dialog system including usability during con...
Speech and sound for in-car infotainment systems
In a hands-busy and eyes-busy activity such as driving, spoken language technology is an important component of the multimodal human-machine interface (HMI) of an in-car infotainment system. Adding speech to the HMI introduces two distinct challenges: accurately acquiring the user’s speech in a noisy car environment, and creating a spoken dialog system that does not require the driver’s full at...
Gesture with Meaning
Embodied conversational agents (ECA) should exhibit nonverbal behaviors that are meaningfully related to their speech and mental state. This paper describes Cerebella, a system that automatically derives communicative functions from the text and audio of an utterance by combining lexical, acoustic, syntactic, semantic and rhetorical analyses. Communicative functions are then mapped to a multimo...
Specification and Realisation of Multimodal Output in Dialogue Systems
We present a high level formalism for specifying verbal and nonverbal output from a multimodal dialogue system. The output specification is XML-based and provides information about communicative functions of the output without detailing the realisation of these functions. The specification can be used to control an animated character that uses speech and gestures. We give examples from an imple...
Journal:
Volume, Issue:
Pages: -
Year of publication: 2010